Gradient Boost

Before moving forward with the to-do list, let's throw a Gradient Boosting model at it.


For many reasons, Random Forest is usually a very good baseline model. In this particular case I started with the polynomial OLS as the baseline, simply because the correlations made it evident that the relationship between temperature and consumption follows a polynomial shape. Now let's move on to another beloved tree ensemble: Gradient Boosting.

Model Cards provide a framework for transparent, responsible reporting. Use the vetiver `.qmd` Quarto template as a place to start, with vetiver.model_card().

Writing pin:
Name: 'wd-gb'
Version: 20250218T092204Z-cb670
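For reference, a minimal sketch of how such a pin can be written; the board location and the pipeline / X_train names are assumptions, not the original code:

import pins
import vetiver

# Assumption: `pipeline` is the fitted pipeline from this section and
# `X_train` holds the training features; names are illustrative.
v = vetiver.VetiverModel(pipeline, model_name="wd-gb", prototype_data=X_train)

# Any pins board works; a local folder board is the simplest to try.
board = pins.board_folder("models", allow_pickle_read=True)
vetiver.vetiver_pin_write(board, v)

# Copies the Quarto model-card template mentioned above into the working directory.
vetiver.model_card()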

Metrics

                                        Single split          CV
                                        train      test       test       train
MAE  - Mean Absolute Error              1.455522   2.070607   2.376637   1.259215
MSE  - Mean Squared Error               4.087286   10.792548  24.346584  3.043919
RMSE - Root Mean Squared Error          2.021704   3.285201   4.168450   1.742897
R2   - Coefficient of Determination     0.960836   0.869472   -0.999837  0.968742
MAPE - Mean Absolute Percentage Error   0.129685   0.189615   0.233271   0.114140
EVS  - Explained Variance Score         0.960836   0.876914   -0.259789  0.968742
MeAE - Median Absolute Error            1.050771   1.189949   1.445549   0.919366
D2   - D2 Absolute Error Score          0.801409   0.697048   0.010447   0.822846
Pinball - Mean Pinball Loss             0.727761   1.035303   1.188318   0.629608
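One column of this table can be reproduced with scikit-learn's metric functions; a minimal sketch, assuming y_true and y_pred hold the observed and predicted values of one split:

from sklearn import metrics

def regression_metrics(y_true, y_pred):
    # One column of the table above for a single (y_true, y_pred) pair.
    return {
        "MAE": metrics.mean_absolute_error(y_true, y_pred),
        "MSE": metrics.mean_squared_error(y_true, y_pred),
        "RMSE": metrics.mean_squared_error(y_true, y_pred) ** 0.5,
        "R2": metrics.r2_score(y_true, y_pred),
        "MAPE": metrics.mean_absolute_percentage_error(y_true, y_pred),
        "EVS": metrics.explained_variance_score(y_true, y_pred),
        "MeAE": metrics.median_absolute_error(y_true, y_pred),
        "D2": metrics.d2_absolute_error_score(y_true, y_pred),
        # Default alpha=0.5, i.e. exactly half the MAE, matching the table.
        "Pinball": metrics.mean_pinball_loss(y_true, y_pred),
    }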

Scatter plot matrix

Observed vs. Predicted and Residuals vs. Predicted

Check the residuals to assess the goodness of fit (a plotting sketch follows the list):

  • Are the residuals white noise, or is there a pattern?
  • Is there heteroscedasticity?
  • Is there non-linearity?
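A minimal sketch of the residuals-vs-predicted check; pipeline, X_test, and y_test are illustrative names, not the original code:

import matplotlib.pyplot as plt

# Residuals of the held-out split.
pred = pipeline.predict(X_test)
resid = y_test - pred

plt.scatter(pred, resid, alpha=0.5)
plt.axhline(0, color="grey", linestyle="--")
plt.xlabel("Predicted")
plt.ylabel("Residual")
plt.title("Residuals vs. Predicted")
plt.show()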

Normality of Residuals

  • Are the residuals normally distributed? (A sketch follows below.)
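A minimal sketch of this normality check, reusing the resid variable from the sketch above:

import matplotlib.pyplot as plt
from scipy import stats

# Q-Q plot against the normal distribution.
stats.probplot(resid, dist="norm", plot=plt)
plt.title("Q-Q plot of residuals")
plt.show()

# Shapiro-Wilk test; H0: residuals are normally distributed.
stat, p = stats.shapiro(resid)
print(f"Shapiro-Wilk p-value: {p:.3f}")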

Leverage

Scale-Location plot

Residuals Autocorrelation Plot
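A minimal sketch of the autocorrelation check, again assuming resid holds the test-set residuals:

import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf

# Significant spikes beyond the confidence band indicate leftover structure.
plot_acf(resid, lags=30)
plt.show()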

Residuals vs Time

Again, the model overfits a lot: R² is above 0.96 on the training set but negative on the CV test folds.

Tuning curves, one panel per hyperparameter (a sketch of the search setup follows the list):

  • param_model__learning_rate
  • param_model__max_depth
  • param_model__min_samples_leaf
  • param_model__min_samples_split
  • param_model__n_estimators
  • param_model__subsample
  • param_vars__columns
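A minimal sketch of the tuning setup these panels imply; the search space, the CV scheme, and the ColumnSelector stand-in are assumptions, not the original code:

from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import RandomizedSearchCV, TimeSeriesSplit
from sklearn.pipeline import Pipeline

class ColumnSelector(BaseEstimator, TransformerMixin):
    # Minimal stand-in for the column-selecting step shown in the pipelines.
    def __init__(self, columns=None):
        self.columns = columns
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return X[self.columns]

pipe = Pipeline([
    ("vars", ColumnSelector()),
    ("model", GradientBoostingRegressor(random_state=7)),
])

# Illustrative search space; it only needs to cover the winning values below.
param_distributions = {
    "vars__columns": [
        ["tt_tu_mean", "rf_tu_mean", "td_mean", "vp_std_mean", "tf_std_mean"],
        ["rf_tu_mean", "vp_std_mean"],
    ],
    "model__learning_rate": [0.01, 0.05, 0.1, 0.2],
    "model__max_depth": [2, 3, 5, 8],
    "model__min_samples_leaf": [1, 5, 20],
    "model__min_samples_split": [2, 12, 48],
    "model__n_estimators": [60, 100, 200],
    "model__subsample": [0.5, 0.8, 1],
}

search = RandomizedSearchCV(
    pipe,
    param_distributions,
    n_iter=50,
    cv=TimeSeriesSplit(n_splits=5),  # time-ordered folds for time-indexed data
    scoring="neg_mean_absolute_error",
    random_state=7,
)
search.fit(X_train, y_train)  # X_train / y_train as before
print(search.best_params_)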

Best model

{'model__learning_rate': 0.1,
 'model__max_depth': 5,
 'model__min_samples_leaf': 5,
 'model__min_samples_split': 48,
 'model__n_estimators': 60,
 'model__subsample': 1,
 'vars__columns': ['rf_tu_mean', 'vp_std_mean']}
Pipeline(steps=[('vars', ColumnSelector(columns=['rf_tu_mean', 'vp_std_mean'])),
                ('model',
                 GradientBoostingRegressor(max_depth=5, min_samples_leaf=5,
                                           min_samples_split=48,
                                           n_estimators=60, random_state=7,
                                           subsample=1))])

Metrics

                                        Single split          CV
                                        train      test       test       train
MAE  - Mean Absolute Error              1.635478   2.135416   2.335653   1.482139
MSE  - Mean Squared Error               7.224902   11.382125  22.804778  4.750869
RMSE - Root Mean Squared Error          2.687918   3.373740   3.968774   2.178939
R2   - Coefficient of Determination     0.930771   0.862341   -0.786301  0.951360
MAPE - Mean Absolute Percentage Error   0.134744   0.199109   0.247968   0.124583
EVS  - Explained Variance Score         0.930771   0.869597   0.061267   0.951360
MeAE - Median Absolute Error            1.053507   1.286427   1.513283   0.990079
D2   - D2 Absolute Error Score          0.776856   0.687566   -0.020406  0.791522
Pinball - Mean Pinball Loss             0.817739   1.067708   1.167827   0.741069

Scatter plot matrix

Observed vs. Predicted and Residuals vs. Predicted

Check the residuals to assess the goodness of fit:

  • Are the residuals white noise, or is there a pattern?
  • Is there heteroscedasticity?
  • Is there non-linearity?

Normality of Residuals

  • Are the residuals normally distributed?

Leverage

Scale-Location plot

Residuals Autocorrelation Plot

Residuals vs Time

Compare vanilla vs. tuned

Metrics

Single split

Metrics based on the test set of the single split

Cross validation
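A minimal sketch of how this comparison can be run; vanilla and tuned are the two pipelines listed under "Model details" below, and X, y the full training data (names and scorers are assumptions):

import pandas as pd
from sklearn.model_selection import TimeSeriesSplit, cross_validate

results = {}
for name, est in {"vanilla": vanilla, "tuned": tuned}.items():
    scores = cross_validate(
        est, X, y,
        cv=TimeSeriesSplit(n_splits=5),
        scoring=("neg_mean_absolute_error", "r2"),
    )
    results[name] = {
        "MAE": -scores["test_neg_mean_absolute_error"].mean(),
        "R2": scores["test_r2"].mean(),
    }
print(pd.DataFrame(results).T)  # one row per model, averaged over folds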

Predictions, residuals, observed


Time vs. Predicted and Observed

Time vs. Residuals

Model details

Vanilla (default hyperparameters):

Pipeline(steps=[('vars',
                 ColumnSelector(columns=['tt_tu_mean', 'rf_tu_mean', 'td_mean',
                                         'vp_std_mean', 'tf_std_mean'])),
                ('model', GradientBoostingRegressor(random_state=7))])
Tuned (best model from the search):

Pipeline(steps=[('vars', ColumnSelector(columns=['rf_tu_mean', 'vp_std_mean'])),
                ('model',
                 GradientBoostingRegressor(max_depth=5, min_samples_leaf=5,
                                           min_samples_split=48,
                                           n_estimators=60, random_state=7,
                                           subsample=1))])

TODOs